
# Efficient Text Encoding

**Olmo2 8B SuperBPE T180k** (Apache-2.0, by UW)
An 8-billion-parameter large language model built on the SuperBPE tokenizer, which encodes text 27% more efficiently than traditional BPE tokenizers.
Tags: Large Language Model, Transformers, English
**Minilm L3 H384 Uncased** (MIT, by nreimers)
A streamlined 3-layer version of Microsoft's MiniLM-L12-H384-uncased model, retaining only layers [3, 7, 11] of the original 12-layer encoder.
Tags: Large Language Model, Transformers
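The layer-pruning idea behind MiniLM-L3 — keeping layers 3, 7, and 11 of a 12-layer encoder — can be illustrated with a minimal sketch. The layer names below are hypothetical stand-ins, not the model's actual module paths:

```python
# Hypothetical stand-in for a 12-layer transformer encoder:
# each entry represents one layer's parameter group.
full_model_layers = [f"encoder.layer.{i}" for i in range(12)]

# MiniLM-L3 keeps only layers [3, 7, 11] of the original model,
# evenly spaced so the pruned stack still spans shallow-to-deep features.
KEEP = [3, 7, 11]
pruned_layers = [full_model_layers[i] for i in KEEP]

print(pruned_layers)
# → ['encoder.layer.3', 'encoder.layer.7', 'encoder.layer.11']
```

Picking evenly spaced layers (rather than the first three) is a common distillation choice, since it preserves representations from different depths of the original network.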
© 2025 AIbase